3 research outputs found

    An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information

    The explanatory capacity of interpretable fuzzy rule-based classifiers is usually limited to offering explanations for the predicted class only. A lack of potentially useful explanations for non-predicted alternatives can be overcome by designing methods for so-called counterfactual reasoning. Nevertheless, state-of-the-art methods for counterfactual explanation generation require special attention to human evaluation aspects, as the final decision on the classification under consideration is left to the end user. In this paper, we first introduce novel methods for qualitative and quantitative counterfactual explanation generation. Then, we carry out a comparative analysis of qualitative explanation generation methods operating on (combinations of) linguistic terms as well as a quantitative method suggesting precise changes in feature values. Next, we propose a new metric for assessing the perceived complexity of the generated explanations. Further, we design and carry out two human evaluation experiments to assess the explanatory power of the aforementioned methods. As a major result, we show that the estimated explanation complexity correlates well with the informativeness, relevance, and readability of explanations as perceived by the targeted study participants. This fact opens the door to using the new automatic complexity metric for guiding multi-objective evolutionary explainable fuzzy modeling in the near future.

    Ilia Stepin is an FPI researcher (grant PRE2019-090153). Jose M. Alonso-Moral is a Ramon y Cajal researcher (grant RYC-2016-19802). This work was supported by the Spanish Ministry of Science and Innovation (grants RTI2018-099646-B-I00, PID2021-123152OB-C21, and TED2021-130295B-C33) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431F2018/02, ED431G2019/04, and ED431C2022/19). All the grants were co-funded by the European Regional Development Fund (ERDF/FEDER program).

    Argumentative Conversational Agents for Explainable Artificial Intelligence

    Recent years have witnessed a striking rise of artificial intelligence algorithms that are able to show outstanding performance. However, such good performance is oftentimes achieved at the expense of explainability. Not only can the lack of algorithmic explainability undermine the user's trust in the algorithmic output, but it can also lead to adverse consequences. In this thesis, we advocate the use of interpretable rule-based models that can serve both as stand-alone applications and as proxies for black-box models. More specifically, we design an explanation generation framework that outputs contrastive, selected, and social explanations for interpretable (decision-tree and rule-based) classifiers. We show that the resulting explanations enhance the effectiveness of AI algorithms while preserving their transparent structure.

    Investigating Human-Centered Perspectives in Explainable Artificial Intelligence

    The widespread use of Artificial Intelligence (AI) in various domains has led to a growing demand for algorithmic understanding, transparency, and trustworthiness. The field of eXplainable AI (XAI) aims to develop techniques that can inspect and explain AI systems’ behaviour in a way that is understandable to humans. However, the effectiveness of explanations depends on how users perceive them, and their acceptability is tied to how well they align with users’ level of understanding and existing knowledge. So far, researchers in XAI have primarily focused on the technical aspects of explanations, mostly without considering users’ needs, yet attending to those needs is necessary for trustworthy AI. Meanwhile, there is growing interest in human-centered approaches that focus on the intersection between AI and human-computer interaction, termed human-centered XAI (HC-XAI). HC-XAI explores methods to achieve user satisfaction, trust, and acceptance for XAI systems. This paper presents a systematic survey on HC-XAI, reviewing 75 papers from various digital libraries. The contributions of this paper include: (1) identifying common human-centered approaches, (2) providing readers with insights into the design perspectives of HC-XAI approaches, and (3) categorising all the papers under study through quantitative and qualitative analysis. The findings stimulate discussion and shed light on ongoing and upcoming research in HC-XAI.